Leveraging shared learning through Massively Multilingual Models, state-of-the-art machine translation models are often able to adapt to the paucity of data for low-resource languages. However, this performance comes at the cost of significantly bloated models which are not practically deployable. Knowledge Distillation is one popular technique to develop competitive, lightweight models. In this work, we first evaluate its use to compress MT models, focusing on languages with extremely limited training data. Through our analysis across 8 languages, we find that the performance of distilled models varies widely with priors such as the amount of synthetic data used for distillation, the student architecture, the training hyperparameters, and the confidence of the teacher model, making distillation a brittle compression mechanism. To mitigate this, we explore the use of post-training quantization for the compression of these models. Here, we find that while distillation provides gains across some low-resource languages, quantization provides more consistent performance trends for the entire range of languages, especially the lowest-resource languages in our target set.
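As a concrete illustration of the quantization side, here is a minimal sketch of post-training dynamic quantization applied to an off-the-shelf Transformer MT model in PyTorch; the checkpoint name is a stand-in for illustration, not one of the paper's models.

```python
# Minimal sketch: post-training dynamic quantization of an MT model.
# The checkpoint below is illustrative, not the one used in the paper.
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"  # stand-in for a low-resource pair
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name).eval()

# Quantize all Linear layers to int8 weights; no retraining required.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

batch = tokenizer(["A short test sentence."], return_tensors="pt")
out = quantized.generate(**batch)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```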
Natural Language Inference (NLI) tasks often require reasoning over multiple steps to reach a conclusion. While the need to generate such intermediate steps (as opposed to a summary explanation) has gained popular support, it is unclear how to generate them without complete end-to-end supervision and how they can be further utilized. In this work, we train a sequence-to-sequence model to generate only the next step given an NLI premise and hypothesis pair (and the previous steps); we then enhance it with external knowledge and symbolic search to generate intermediate steps with only next-step supervision. We show the correctness of the generated steps through automated and human verification. Furthermore, we show that such generated steps can help improve end-to-end NLI task performance using simple data-augmentation strategies across multiple publicly available NLI datasets.
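A minimal sketch of the next-step generation loop described above, assuming a T5-style sequence-to-sequence model; the input serialization and checkpoint are illustrative, not the paper's exact setup.

```python
# Sketch: generate one intermediate reasoning step at a time, conditioning
# on the NLI pair and the steps produced so far. Format is illustrative.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def next_step(premise, hypothesis, prev_steps):
    prompt = (f"premise: {premise} hypothesis: {hypothesis} "
              f"steps so far: {' | '.join(prev_steps) or 'none'}")
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32)
    return tok.decode(out[0], skip_special_tokens=True)

# Iteratively extend the chain one step at a time.
steps = []
steps.append(next_step("All birds can fly.", "A sparrow can fly.", steps))
```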
Few-shot transfer often shows substantial gains over zero-shot transfer~\cite{lauscher2020zero}, making it a useful trade-off between the fully supervised and unsupervised learning regimes for systems based on multilingual models. This paper explores various strategies for selecting data for annotation that can result in better few-shot transfer. The proposed approaches rely on multiple measures such as an $n$-gram language model, predictive entropy, and gradient embeddings. We propose a loss-embedding method for sequence-labeling tasks, which induces diversity- and uncertainty-based sampling similar to gradient embeddings. The proposed data-selection strategies are evaluated and compared on POS tagging, NER, and NLI tasks for up to 20 languages. Our experiments show that the gradient- and loss-embedding-based strategies consistently outperform random data-selection baselines, with gains that vary with the initial zero-shot transfer performance. Moreover, the proposed methods show similar trends of improvement even when the model is fine-tuned with a smaller proportion of the original task-specific labeled training data for zero-shot transfer.
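A sketch of one of the simpler selection measures named above, predictive entropy, used to rank unlabeled sentences for annotation; the model interface is assumed to be Hugging-Face-style and the scoring details are illustrative, not the paper's exact method.

```python
# Sketch: rank unlabeled sentences by mean per-token predictive entropy
# and pick the most uncertain ones for annotation.
import torch

@torch.no_grad()
def predictive_entropy(model, enc):
    logits = model(**enc).logits[0]          # (seq_len, num_labels)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return entropy.mean().item()

def select_for_annotation(model, encoded_candidates, budget):
    scores = [predictive_entropy(model, enc) for enc in encoded_candidates]
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return ranked[:budget]                   # indices to send to annotators
```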
Sentiment analysis is one of the most widely studied applications in NLP, but most work focuses on languages with large amounts of data. We introduce the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria (Hausa, Igbo, Nigerian-Pidgin, and Yorùbá), consisting of around 30,000 annotated tweets per language (and 14,000 for Nigerian-Pidgin), including a significant fraction of code-mixed tweets. We propose text collection, filtering, processing, and labeling methods that enable us to create datasets for these low-resource languages. We evaluate pre-trained models and transfer strategies on the dataset, and find that language-specific models and language-adaptive fine-tuning generally perform best. We release the datasets, trained models, sentiment lexicons, and code as incentives for research on sentiment analysis in under-represented languages.
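A sketch of language-adaptive fine-tuning (continued masked-language-model pre-training on unlabeled target-language text) before task fine-tuning, which the abstract reports among the best-performing strategies; the corpus file, base model, and hyperparameters are placeholders, not the paper's setup.

```python
# Sketch: continue MLM pre-training on unlabeled target-language tweets
# before fine-tuning on the sentiment task. Paths are hypothetical.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

corpus = load_dataset("text", data_files={"train": "hausa_tweets.txt"})
corpus = corpus.map(lambda b: tok(b["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments("laft-hausa", num_train_epochs=3),
    train_dataset=corpus["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # the adapted encoder is then fine-tuned on labeled tweets
```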
Natural Language Inference (NLI) is considered a representative task for testing natural language understanding (NLU). In this work, we propose an extensible framework to collectively yet categorically test the diverse logical reasoning capabilities required for NLI (and, by extension, NLU). Motivated by behavioral testing, we create a semi-synthetic large test bench (363 templates, 363k examples) and an associated framework that offers the following utilities: 1) individually testing and analyzing reasoning capabilities along 17 reasoning dimensions (including pragmatic reasoning); 2) designing experiments to study cross-capability information content (leave one out or bring one in); and 3) controlling for artifacts and biases through the synthetic nature of the data. The inherited ability to instantiate test cases automatically from free-form natural language templates (using CheckList), together with an explicit taxonomy of capabilities, allows us to scale to (cognitively) harder test cases while varying the complexity of the natural language. Through our analysis of state-of-the-art NLI systems, we observe that our benchmark is indeed difficult (and non-trivial even with training on additional resources), and that some capabilities stand out as harder than others. Further fine-grained analysis and fine-tuning experiments reveal more insights about these capabilities and models, supporting and extending previous observations. Finally, we also conduct a user study to investigate whether behavioral information can be utilized to generalize better for some models compared to others.
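A sketch of the CheckList-style template instantiation underlying such a test bench; the template and name lexicon are toy examples, not drawn from the actual benchmark.

```python
# Sketch: instantiate NLI test cases from a free-form template with a
# fixed expected label. Template and lexicon are toy examples.
from itertools import product

template = ("{a} is taller than {b}.",   # premise
            "{b} is shorter than {a}.",  # hypothesis
            "entailment")                # expected label

names = ["John", "Mary", "Ade"]
cases = [
    {"premise": template[0].format(a=a, b=b),
     "hypothesis": template[1].format(a=a, b=b),
     "label": template[2]}
    for a, b in product(names, names) if a != b
]
for c in cases:
    print(c)  # each dict is one behavioral test case for the comparison skill
```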
Deep contextual language models (LMs) such as ELMo, BERT, and their successors have rapidly reshaped the landscape of natural language processing by pre-training a single model and then fine-tuning it for specific tasks. Furthermore, multilingual versions of such models, like XLM-R and mBERT, have produced promising zero-shot cross-lingual transfer results, potentially enabling NLP applications in many under-served and under-resourced languages. Due to this early success, pre-trained models are being used as "universal language models" as the starting point across different tasks, domains, and languages. This work explores the notion of "universality" by identifying seven dimensions along which a universal model should be able to scale, that is, perform equally well or reasonably well, to be useful in different settings. We outline the current theoretical and empirical results supporting model performance along these dimensions, as well as extensions that may help address some of their current limitations. Through this survey, we lay the foundation for understanding the capabilities and limitations of massive contextual language models, and help discern research gaps and directions for future work to make these LMs inclusive of diverse applications, users, and linguistic phenomena.
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
The demonstrated success of transfer learning has popularized approaches that involve pretraining models from massive data sources and subsequent finetuning towards a specific task. While such approaches have become the norm in fields such as natural language processing, implementation and evaluation of transfer learning approaches for chemistry are in the early stages. In this work, we demonstrate finetuning for downstream tasks on a graph neural network (GNN) trained over a molecular database containing 2.7 million water clusters. The use of Graphcore IPUs as an AI accelerator for training molecular GNNs reduces training time from a reported 2.7 days on 0.5M clusters to 1.2 hours on 2.7M clusters. Finetuning the pretrained model for downstream tasks of molecular dynamics and transfer to a different potential energy surface took only 8.3 hours and 28 minutes, respectively, on a single GPU.
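A sketch of the finetuning pattern described above: load pretrained GNN weights, freeze the message-passing layers, and train a new readout head for the downstream target. The architecture and checkpoint path are illustrative, not the paper's model.

```python
# Sketch: finetune a pretrained molecular GNN on a new per-graph target.
# Architecture and checkpoint path are hypothetical stand-ins.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class EnergyGNN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)  # per-graph energy readout

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.head(global_mean_pool(h, batch))

model = EnergyGNN()
model.load_state_dict(torch.load("pretrained_water_gnn.pt"), strict=False)
# Freeze the message-passing layers; finetune only the readout head.
for p in [*model.conv1.parameters(), *model.conv2.parameters()]:
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
```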
By utilizing only depth information, this paper introduces a novel yet efficient local planning approach that improves both the computational efficiency and the planning performance of memoryless local planners. We first propose sampling based on the depth data, which can identify and eliminate a specific type of in-collision trajectory from the sampled motion-primitive library. More specifically, all occluded primitive endpoints are found by querying the depth values and are excluded from the sampled set, which significantly reduces the computational workload required for collision checking. In addition, we propose a steering mechanism, also based on the depth information, that effectively prevents an autonomous vehicle from getting stuck when facing a large convex obstacle, providing a higher level of autonomy for the planning system. Our steering technique is theoretically proven to be complete in scenarios with convex obstacles. To evaluate the effectiveness of the proposed DEpth-based Sampling and Steering (DESS) methods, we implemented them in synthetic environments in which a quadrotor was simulated flying through a cluttered region with multiple obstacles of different sizes. The results demonstrate that the proposed approach considerably decreases the computing time of local planners, allowing more trajectories to be evaluated while finding a best path with much lower cost. More importantly, the success rate, measured as the fraction of runs in which the robot successfully navigated to its destination across the different test scenarios, is always higher than 99.6% on average.
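A sketch of the depth-based endpoint filtering: each primitive endpoint is projected into the depth image with a pinhole model and discarded if it lies at or behind the observed surface. The camera intrinsics and safety margin below are assumed values, not the paper's parameters.

```python
# Sketch: keep only primitive endpoints that are strictly in front of the
# surface observed in the depth image. Intrinsics/margin are assumptions.
import numpy as np

def visible_endpoints(endpoints_cam, depth, fx, fy, cx, cy, margin=0.2):
    """endpoints_cam: (N, 3) endpoints in the camera frame (z forward)."""
    keep = []
    for p in endpoints_cam:
        if p[2] <= 0:                      # behind the camera plane
            continue
        u = int(fx * p[0] / p[2] + cx)     # pinhole projection to pixels
        v = int(fy * p[1] / p[2] + cy)
        if not (0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]):
            continue
        if p[2] + margin < depth[v, u]:    # in front of observed surface
            keep.append(p)
    return np.array(keep)                  # endpoints passed to collision check
```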
Two of the most fundamental challenges in natural language understanding (NLU) at present are: (a) how to establish whether deep-learning-based models score highly on NLU benchmarks for the "right" reasons; and (b) understanding what those reasons would even be. We investigate the behavior of reading comprehension models with respect to two linguistic "skills": coreference resolution and comparison. We propose a definition of the reasoning steps expected from a system that would be "reading slowly", and compare them with the behavior of five models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations. We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the "right" information, but even they struggle with generalization, suggesting that they still learn specific lexical patterns rather than the general principles of comparison.
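A sketch of one gradient-based saliency score of the kind used in such analyses, assuming a BERT-style extractive QA model; the aggregation (L2 norm of the input-embedding gradient) is one common choice, not necessarily the paper's.

```python
# Sketch: token saliency as the L2 norm of the gradient of the predicted
# start logit w.r.t. the input embeddings. Checkpoint is illustrative.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
model.eval()

enc = tok("Who is taller?", "Ann is taller than Bo.", return_tensors="pt")
emb = model.get_input_embeddings()(enc["input_ids"])
emb.retain_grad()                           # keep gradient on a non-leaf tensor
out = model(inputs_embeds=emb, attention_mask=enc["attention_mask"])
out.start_logits.max().backward()           # saliency w.r.t. the start span

saliency = emb.grad.norm(dim=-1).squeeze()  # one score per input token
for t, s in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), saliency):
    print(f"{t:12s} {s:.3f}")
```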